
    Older Adults’ Deployment of ‘Distrust’

    Older adults frequently deploy the concept of distrust when discussing digital technologies, and it is tempting to assume that distrust is largely responsible for the reduced uptake by older adults witnessed in the latest surveys of technology use. To help understand the impact of distrust on adoption behavior, we conducted focus groups with older adults exploring how, in what circumstances, and to what effect older adults articulate distrust in digital technologies. Our findings indicate that distrust is not especially relevant to older adults' practical decision making around technology (non-)use. The older adults in our study used the language of distrust to open up discussions around digital technologies to larger issues related to values. This suggests that looking to distrust as a predictor of non-use (e.g. in Technology Acceptance Model studies) may be uniquely unhelpful in the case of older adults, as it narrows the discussion of technology acceptance and trust to interactional issues, when their use of distrust pertains to much wider concerns. Likewise, technology adoption should not be viewed as indicative of trust or an endorsement of technology acceptability. Older adults using-while-distrusting offers important insights into how to design truly acceptable digital technologies.

    "Convince us": an argument for the morality of persuasion

    This paper explores the difference between 'persuasion' and 'manipulation', both of which are instantiated in persuasive technologies to date. We present a case study of the system we are currently developing to foster local spending behavior by a community group (with sensitive implications for the community's sense of identity) and contrast our approach with what we would understand to be a manipulative approach. Our intention is to a) respond to the anticipated critique that such a system could be interpreted as manipulative, b) present our argument for how persuasive technologies can be persuasive without being manipulative, and c) explain why, for this case study, it is important that our approach be persuasive.

    Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust

    It is curious that AI increasingly outperforms human decision makers, yet much of the public distrusts AI to make decisions affecting their lives. In this paper we explore a novel theory that may explain one reason for this. We propose that public distrust of AI is a moral consequence of designing systems that prioritize reduction of the costs of false positives over the less tangible costs of false negatives. We show that such systems, which we characterize as 'distrustful', are more likely to miscategorize trustworthy individuals, with cascading consequences for both those individuals and the overall human-AI trust relationship. Ultimately, we argue that public distrust of AI stems from well-founded concern about the potential of being miscategorized. We propose that restoring public trust in AI will require that systems are designed to embody a stance of 'humble trust', whereby the moral costs of the misplaced distrust associated with false negatives are weighted appropriately during development and use.
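    The cost asymmetry the abstract describes can be illustrated with a small sketch. This is not from the paper itself; the function name, costs, and probabilities below are illustrative assumptions. Under standard cost-sensitive decision theory, a system should trust an individual only when its estimated probability of trustworthiness exceeds C_fp / (C_fp + C_fn), so weighting false positives (misplaced trust) far above false negatives (misplaced distrust) raises the bar and miscategorizes more trustworthy people.

```python
# Hypothetical sketch: how asymmetric error costs shift a trust decision
# threshold. cost_fp = cost of wrongly trusting (false positive);
# cost_fn = cost of wrongly distrusting (false negative).

def trust_threshold(cost_fp: float, cost_fn: float) -> float:
    """Probability of trustworthiness above which trusting minimises expected cost."""
    return cost_fp / (cost_fp + cost_fn)

# A 'distrustful' system: false positives weighted 9x false negatives.
distrustful = trust_threshold(cost_fp=9.0, cost_fn=1.0)  # threshold 0.9

# 'Humble trust': the moral cost of misplaced distrust weighted equally.
humble = trust_threshold(cost_fp=1.0, cost_fn=1.0)       # threshold 0.5

# A trustworthy individual scored at p = 0.8 is distrusted by the first
# system but trusted by the second.
p = 0.8
print(p > distrustful)  # False: miscategorized as untrustworthy
print(p > humble)       # True
```

    The point of the sketch is that 'humble trust' is not about ignoring error costs but about weighting the false-negative term so that it actually appears in the denominator with comparable force.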

    Exploring sustainability research in computing: where we are and where we go next

    This paper develops a holistic framework of questions motivating sustainability research in computing in order to enable new opportunities for critique. Analysis of systematically selected corpora of computing publications demonstrates that several of these question areas are well covered, while others are ripe for further exploration. It also provides insight into which of these questions tend to be addressed by different communities within sustainable computing. The framework itself reveals discursive similarities with other existing environmental discourses, enabling reflection on, and participation in, the broader sustainability debate. It is argued that the current computing discourse on sustainability is reformist and premised on a Triple Bottom Line construction of sustainability, and a radical, Quadruple Bottom Line alternative is explored as a new vista for computing research.

    Revealing flows in the local economy through visualisations: customers, clicks/cliques and clusters

    It is well known by now that the world has suffered an economic downturn, which has led many governments and organisations to invest resources into researching strategies to combat the problem. For some time, governments have promoted growth by encouraging local spending; we have witnessed this through 'shop local' campaigns and local currencies. We introduce BARTER (a moBile sociAl netwoRking supporTing local Ethical tRading system) to tackle this issue: at its core, an information system that brings together technology, social media and business analytics to engage customers, traders and citizens in spending locally by highlighting the intrinsic and extrinsic motivations of trading local. After situating BARTER at the heart of the community (with varying traders in and around Lancaster, UK) for some time, this paper follows on from the 'BARTER Visualisations' design concept, reporting on the progression and recent developments in the project. Whilst these systems are in place within the community, further research is being conducted to evaluate whether revealing and transforming transaction data in a playful and informative manner will help citizens better understand the flow of money in the local economy.

    The Sanction of Authority: Promoting Public Trust in AI

    Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and literature on institutional trust, we offer a model of public trust in AI that differs starkly from models driving Trusted AI efforts. We describe the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations (both to inform potential adopters of AI components and to support the deliberations of risk and ethics review boards) are necessary but insufficient assurance of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.

    The Wisdom of Older Technology (Non-)Users

    Older adults consistently reject digital technology even when designed to be accessible and trustworthy.

    Fifty Shades of Grey: In Praise of a Nuanced Approach Towards Trustworthy Design

    Environmental data science is uniquely placed to respond to essentially complex and fantastically worthy challenges related to arresting planetary destruction. Trust is needed for facilitating collaboration between scientists who may share datasets and algorithms, and for crafting appropriate science-based policies. Achieving this trust is particularly challenging because of the numerous complexities, multi-scale variables, interdependencies and multi-level uncertainties inherent in environmental data science. Virtual Labs (easily accessible online environments provisioning access to datasets, analysis and visualisations) are socio-technical systems which, if carefully designed, might address these challenges and promote trust in a variety of ways. In addition to various system properties that can be utilised in support of effective collaboration, certain features which are commonly seen to benefit trust, transparency and provenance in particular, appear applicable to promoting trust in and through Virtual Labs. Attempting to realise these features in their design reveals, however, that their implementation is more nuanced and complex than it would appear. Using the lens of affordances, we argue for the need to carefully articulate these features, with consideration of multiple stakeholder needs on balance, so that these Virtual Labs do in fact promote trust. We argue that these features should not be conceived as widgets that can be imported into a given context to promote trust; rather, whether they promote trust is a function of how systematically designers consider various (potentially conflicting) stakeholder trust needs.

    The Alchemy of Trust: The Creative Act of Designing Trustworthy Socio-Technical Systems

    Trust is recognised as a significant and valuable component of socio-technical systems, facilitating numerous important benefits. Many trust models have been created throughout various streams of literature, describing trust for different stakeholders in different contexts. However, when designing a system with multiple stakeholders in their multiple contexts, how does one decide which trust model(s) to apply? And furthermore, how does one go from selecting a model or models to translating those into design? We review and analyse two prominent trust models, and apply them to the design of a trustworthy socio-technical system, namely virtual research environments. We show that a singular model cannot easily be imported and directly implemented into the design of such a system. We introduce the concept of alchemy as the most apt characterization of a successful design process, illustrating the need for designers to engage with the richness of the trust landscape and creatively experiment with components from multiple models to create the perfect blend for their context. We provide a demonstrative case study illustrating the process through which designers of socio-technical systems can become alchemists of trust.